Obviously, things like excessive deletion stubs and pre-existing database fragmentation aren't an issue here, but disk fragmentation could be. The file is going to start off very small and have to grow a lot, which means a lot of new disk allocations. You might want to check that your server disk is being defragmented regularly, look into DominoDefrag on the OpenNTF site or Defrag.NSF (commercial software from Preemptime), and/or create a "big empty NSF" and try importing into that.
On that latter point: create a new, empty database and fill it with a small number of very large documents, e.g., 10 documents with 100 MB of attachments each, giving you a 1 GB database. (Do this on a server that does not have DAOS enabled, so the attachment storage is really allocated in the NSF.) Now delete the documents and do not compact. You now have a big empty database. If you then defrag the disk, that big empty database will occupy a large contiguous space on disk, and it won't have to grow during your import, so the writes will be about as fast as you can possibly get.
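If it helps, here's a rough sketch of that pre-allocation trick using the Notes Java API (Notes.jar / lotus.domino). The database file name, the attachment sizes, and the document count are just placeholders for my example above; adjust them for your environment, and run it against a server (or local replica) that does not use DAOS so the attachment bytes really land in the NSF.

import java.io.File;
import java.io.FileOutputStream;
import lotus.domino.Database;
import lotus.domino.DbDirectory;
import lotus.domino.Document;
import lotus.domino.EmbeddedObject;
import lotus.domino.NotesFactory;
import lotus.domino.NotesThread;
import lotus.domino.RichTextItem;
import lotus.domino.Session;

// Pre-allocate a "big empty NSF": fill it with large attachments, then delete them without compacting.
public class PreallocateNsf {
    public static void main(String[] args) throws Exception {
        // Build a ~100 MB scratch file to use as the attachment.
        File filler = File.createTempFile("filler", ".bin");
        try (FileOutputStream out = new FileOutputStream(filler)) {
            byte[] chunk = new byte[1024 * 1024];   // 1 MB of zeros
            for (int i = 0; i < 100; i++) {
                out.write(chunk);
            }
        }

        NotesThread.sinitThread();
        try {
            Session session = NotesFactory.createSession();
            // "bigempty.nsf" is a placeholder file name in the local data directory.
            DbDirectory dir = session.getDbDirectory(null);
            Database db = dir.createDatabase("bigempty.nsf", true);

            // 10 documents x ~100 MB attachment = roughly 1 GB of allocated NSF space.
            for (int i = 0; i < 10; i++) {
                Document doc = db.createDocument();
                RichTextItem body = doc.createRichTextItem("Body");
                body.embedObject(EmbeddedObject.EMBED_ATTACHMENT, "",
                        filler.getAbsolutePath(), null);
                doc.save(true, false);
                doc.recycle();
            }

            // Delete the filler documents but do NOT compact: the white space stays
            // allocated, so the import can reuse it instead of growing the file.
            db.getAllDocuments().removeAll(true);

            db.recycle();
            session.recycle();
        } finally {
            NotesThread.stermThread();
            filler.delete();
        }
    }
}

After that, defrag the disk and point your import at the pre-grown database.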
Of course, that's in addition to the other suggestions, but a badly fragmented database can have a pretty big impact on throughput, so if your code is already as optimized as it can get, this would be a logical next step.
-rich
Feedback response number WEBB95DQ4B created by ~Ned Nimfanakonyoopsi on 03/01/2013